Pretty much all symbolic cognitive models use a representation framework like this:
after(eight, nine)
chase(dogs, cats)
knows(Anne, thinks(Bill, likes(Charlie, Dave)))
Cognitive models manipulate these sorts of representations.
2. Implementing Symbol Systems in Neurons
3. Vector operators
Why circular convolution?
Examples:
BLUE $\circledast$ SQUARE + RED $\circledast$ CIRCLE

DOG $\circledast$ AGENT + CAT $\circledast$ THEME + VERB $\circledast$ CHASE
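To make the operator concrete, here is a minimal NumPy sketch (not from the original notes): cconv computes circular convolution via the Fourier domain, and the vocabulary items are random unit vectors, the usual stand-in for semantic pointers. The helper names (cconv, unit_vector) and the dimensionality are choices of this sketch, not part of nengo_spa.

import numpy as np

def cconv(a, b):
    # Circular convolution: elementwise multiplication in the Fourier domain
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

rng = np.random.RandomState(0)

def unit_vector(d):
    # Random unit vector standing in for a vocabulary item
    v = rng.randn(d)
    return v / np.linalg.norm(v)

d = 512  # dimensionality; an arbitrary choice for this sketch
BLUE, SQUARE, RED, CIRCLE = (unit_vector(d) for _ in range(4))

# BLUE convolved with SQUARE plus RED convolved with CIRCLE:
# the whole scene is a single d-dimensional vector
scene = cconv(BLUE, SQUARE) + cconv(RED, CIRCLE)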
Lots of nice properties:
after(eight, nine) becomes NUMBER $\circledast$ EIGHT + NEXT $\circledast$ NINE

knows(Anne, thinks(Bill, likes(Charlie, Dave))) becomes SUBJ $\circledast$ ANNE + ACT $\circledast$ KNOWS + OBJ $\circledast$ (SUBJ $\circledast$ BILL + ACT $\circledast$ THINKS + OBJ $\circledast$ (SUBJ $\circledast$ CHARLIE + ACT $\circledast$ LIKES + OBJ $\circledast$ DAVE))
If RED is similar to PINK, then RED $\circledast$ CIRCLE is similar to PINK $\circledast$ CIRCLE.
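A quick numerical check of this property, reusing cconv and unit_vector from the sketch above (the 0.9 similarity level is an arbitrary choice for illustration):

PINK = 0.9 * RED + np.sqrt(1 - 0.9 ** 2) * unit_vector(d)  # roughly 0.9 similar to RED
PINK = PINK / np.linalg.norm(PINK)

print(np.dot(RED, PINK))                                # about 0.9
print(np.dot(cconv(RED, CIRCLE), cconv(PINK, CIRCLE)))  # also about 0.9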
But it can get rather complicated:
The Binding Problem: in RED $\circledast$ CIRCLE + BLUE $\circledast$ TRIANGLE, what colour is the circle? Convolve with CIRCLE', where ' is the (approximate) inverse:

(RED $\circledast$ CIRCLE + BLUE $\circledast$ TRIANGLE) $\circledast$ CIRCLE'
= RED $\circledast$ CIRCLE $\circledast$ CIRCLE' + BLUE $\circledast$ TRIANGLE $\circledast$ CIRCLE'
= RED + BLUE $\circledast$ TRIANGLE $\circledast$ CIRCLE'
= RED + noise
$\approx$ RED
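The same unbinding step, continuing the NumPy sketch above. Here inverse implements the involution (keep the first element, reverse the rest), the standard approximate inverse for circular convolution; it is a helper introduced for this sketch:

def inverse(a):
    # Approximate inverse (involution): keep element 0, reverse the rest
    return np.concatenate([a[:1], a[1:][::-1]])

TRIANGLE = unit_vector(d)
scene = cconv(RED, CIRCLE) + cconv(BLUE, TRIANGLE)

guess = cconv(scene, inverse(CIRCLE))  # "what is bound to CIRCLE?"
for name, v in [('RED', RED), ('BLUE', BLUE), ('TRIANGLE', TRIANGLE)]:
    print(name, np.dot(guess, v))  # RED should score far higher than the others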
The Problem of 2: how do we represent two instances of the same thing, such as a little star beside a big star?

OBJ1 $\circledast$ (TYPE $\circledast$ STAR + SIZE $\circledast$ LITTLE) + OBJ2 $\circledast$ (TYPE $\circledast$ STAR + SIZE $\circledast$ BIG) + BESIDE $\circledast$ OBJ1 $\circledast$ OBJ2

Note that BESIDE $\circledast$ OBJ1 $\circledast$ OBJ2 = BESIDE $\circledast$ OBJ2 $\circledast$ OBJ1, since circular convolution is commutative.
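Continuing the NumPy sketch, we can check that the OBJ1/OBJ2 tags keep the two stars separate (all vectors are again random stand-ins):

(TYPE, STAR, SIZE, LITTLE, BIG,
 OBJ1, OBJ2, BESIDE) = (unit_vector(d) for _ in range(8))

scene = (cconv(OBJ1, cconv(TYPE, STAR) + cconv(SIZE, LITTLE))
         + cconv(OBJ2, cconv(TYPE, STAR) + cconv(SIZE, BIG))
         + cconv(BESIDE, cconv(OBJ1, OBJ2)))

size1 = cconv(cconv(scene, inverse(OBJ1)), inverse(SIZE))  # unbind OBJ1, then SIZE
print(np.dot(size1, LITTLE), np.dot(size1, BIG))  # LITTLE should clearly win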
Convolution also gives a form of variable substitution:

S = RED $\circledast$ NOUN
VAR = BALL $\circledast$ NOUN'
S $\circledast$ VAR = RED $\circledast$ NOUN $\circledast$ BALL $\circledast$ NOUN' $\approx$ RED $\circledast$ BALL
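The same substitution, numerically (again with random stand-in vectors from the sketch above):

NOUN, BALL = unit_vector(d), unit_vector(d)

S = cconv(RED, NOUN)              # "RED <noun>"
VAR = cconv(BALL, inverse(NOUN))  # substitutes BALL into the NOUN slot

result = cconv(S, VAR)
print(np.dot(result, cconv(RED, BALL)))  # well above chance: result is close to RED bound with BALL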
There have been a fair number of different attempts to solve these problems.
Does this vector approach offer an alternative?
How do we represent a picture?
SHAPE $\circledast$ ARROW + NUMBER $\circledast$ ONE + DIRECTION $\circledast$ UP
We have shown that it's possible to build these sorts of representations up directly from visual stimuli.
In [14]:
from IPython.display import YouTubeVideo
YouTubeVideo('U_Q6Xjz9QHg', width=720, height=400, loop=1, autoplay=0, playlist='U_Q6Xjz9QHg')
Out[14]:
In [15]:
from IPython.display import YouTubeVideo
YouTubeVideo('Q_LRvnwnYp8', width=720, height=400, loop=1, autoplay=0, playlist='Q_LRvnwnYp8')
Out[15]:
S1 = ONE $\circledast$ P1
S2 = ONE $\circledast$ P1 + ONE $\circledast$ P2
S3 = ONE $\circledast$ P1 + ONE $\circledast$ P2 + ONE $\circledast$ P3
S4 = FOUR $\circledast$ P1
S5 = FOUR $\circledast$ P1 + FOUR $\circledast$ P2
S6 = FOUR $\circledast$ P1 + FOUR $\circledast$ P2 + FOUR $\circledast$ P3
S7 = FIVE $\circledast$ P1
S8 = FIVE $\circledast$ P1 + FIVE $\circledast$ P2

What is S9?
How does Spaun make a guess at the end?
T1 = S2 $\circledast$ S1'
T2 = S3 $\circledast$ S2'
T3 = S5 $\circledast$ S4'
T4 = S6 $\circledast$ S5'
T5 = S8 $\circledast$ S7'
T = (T1 + T2 + T3 + T4 + T5)/5

S9 = S8 $\circledast$ T
$\approx$ FIVE $\circledast$ P1 + FIVE $\circledast$ P2 + FIVE $\circledast$ P3
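This induction step is easy to check numerically with the same NumPy sketch (cconv, inverse, and unit_vector as defined above; P1, P2, P3 and the number vectors are random stand-ins):

P1, P2, P3, ONE, FOUR, FIVE = (unit_vector(d) for _ in range(6))

S1 = cconv(ONE, P1)
S2 = S1 + cconv(ONE, P2)
S3 = S2 + cconv(ONE, P3)
S4 = cconv(FOUR, P1)
S5 = S4 + cconv(FOUR, P2)
S6 = S5 + cconv(FOUR, P3)
S7 = cconv(FIVE, P1)
S8 = S7 + cconv(FIVE, P2)

# Average the pairwise transformations, then apply the result to S8
T = (cconv(S2, inverse(S1)) + cconv(S3, inverse(S2)) + cconv(S5, inverse(S4))
     + cconv(S6, inverse(S5)) + cconv(S8, inverse(S7))) / 5
S9 = cconv(S8, T)

target = cconv(FIVE, P1) + cconv(FIVE, P2) + cconv(FIVE, P3)
print(np.dot(S9, target) / (np.linalg.norm(S9) * np.linalg.norm(target)))
# cosine similarity well above chance: S9 looks like three FIVEs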
This becomes a novel way of manipulating structured information.
In [38]:
%matplotlib inline
import matplotlib.pyplot as plt
import nengo
import nengo_spa as spa
The colour input will switch every 0.5 seconds between RED and BLUE. In the same way, the shape input switches between CIRCLE and SQUARE. Thus, the network will alternately bind RED * CIRCLE and BLUE * SQUARE for 0.5 seconds each. The cue for deconvolving the bound semantic pointers cycles through CIRCLE, RED, SQUARE, and BLUE within one second.
In [41]:
def colour_input(t):
    if (t // 0.5) % 2 == 0:
        return 'RED'
    else:
        return 'BLUE'

def shape_input(t):
    if (t // 0.5) % 2 == 0:
        return 'CIRCLE'
    else:
        return 'SQUARE'

def cue_input(t):
    sequence = ['0', 'CIRCLE', 'RED', '0', 'SQUARE', 'BLUE']
    idx = int((t // (1. / len(sequence))) % len(sequence))
    return sequence[idx]
In [42]:
# Number of dimensions for the Semantic Pointers
dimensions = 32

model = spa.Network(label="Simple question answering")
with model:
    colour_in = spa.Transcode(colour_input, output_vocab=dimensions)
    shape_in = spa.Transcode(shape_input, output_vocab=dimensions)
    cue = spa.Transcode(cue_input, output_vocab=dimensions)

    conv = spa.State(dimensions)
    out = spa.State(dimensions)

    # Connect the buffers: bind colour and shape into conv,
    # then unbind with the (approximate inverse of the) cue
    colour_in * shape_in >> conv
    conv * ~cue >> out
In [43]:
with model:
    model.config[nengo.Probe].synapse = nengo.Lowpass(0.03)
    p_colour_in = nengo.Probe(colour_in.output)
    p_shape_in = nengo.Probe(shape_in.output)
    p_cue = nengo.Probe(cue.output)
    p_conv = nengo.Probe(conv.output)
    p_out = nengo.Probe(out.output)
In [44]:
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/blue_red_spa.py.cfg")
In [45]:
sim = nengo.Simulator(model)
sim.run(3.)
In [46]:
plt.figure(figsize=(10, 10))
vocab = model.vocabs[dimensions]
plt.subplot(5, 1, 1)
plt.plot(sim.trange(), spa.similarity(sim.data[p_colour_in], vocab))
plt.legend(vocab.keys(), fontsize='x-small')
plt.ylabel("colour")
plt.subplot(5, 1, 2)
plt.plot(sim.trange(), spa.similarity(sim.data[p_shape_in], vocab))
plt.legend(vocab.keys(), fontsize='x-small')
plt.ylabel("shape")
plt.subplot(5, 1, 3)
plt.plot(sim.trange(), spa.similarity(sim.data[p_cue], vocab))
plt.legend(vocab.keys(), fontsize='x-small')
plt.ylabel("cue")
plt.subplot(5, 1, 4)
for pointer in ['RED * CIRCLE', 'BLUE * SQUARE']:
    plt.plot(sim.trange(), vocab.parse(pointer).dot(sim.data[p_conv].T), label=pointer)
plt.legend(fontsize='x-small')
plt.ylabel("convolved")
plt.subplot(5, 1, 5)
plt.plot(sim.trange(), spa.similarity(sim.data[p_out], vocab))
plt.legend(vocab.keys(), fontsize='x-small')
plt.ylabel("output")
plt.xlabel("time [s]");
In [37]:
sum(ens.n_neurons for ens in model.all_ensembles)  # total number of neurons
Out[37]: